Agentic AI
A Meta agentic AI sparked a security incident by acting without permission
Maybe think twice before letting an AI take over all your tech? According to the publication, an employee used an in-house agentic AI to analyze a query that a second employee had posted on an internal forum. The agent replied to the second employee with advice even though the first employee had not directed it to do so. The second employee followed the agent's recommendation, setting off a domino effect that left some engineers with access to Meta systems they should not have had permission to see. A company representative confirmed the incident and said that no user data was mishandled.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence (1.00)
Nurturing agentic AI beyond the toddler stage
The promise of autonomous agentic AI requires significant changes in the governance landscape. Parents of young children face many fears about developmental milestones, from infancy through adulthood. The number of months it takes a baby to learn to talk or walk is often used as a benchmark for wellness, or as an indicator that additional tests are needed to properly diagnose a potential health condition. A parent rejoices over the child's first steps, then realizes how much has changed once the child can quickly walk outside instead of slowly crawling in a safe area indoors. Suddenly safety, including childproofing, demands a completely different lens and approach. Generative AI hit toddlerhood between December 2025 and January 2026 with the introduction of no-code tools from multiple vendors and the debut of OpenClaw, an open-source personal agent posted on GitHub.
- North America > United States > Massachusetts (0.05)
- North America > United States > California (0.05)
- Retail (0.48)
- Leisure & Entertainment (0.48)
- Information Technology (0.48)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language (0.96)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.72)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.38)
Building a strong data infrastructure for AI agent success
As companies race to adopt agentic AI to spur innovation and gain efficiency, building the right enterprise data infrastructure has become a critical component of success. In the race to adopt and show value from AI, enterprises are moving faster than ever to deploy agentic AI as copilots, assistants, and autonomous task-runners. In late 2025, nearly two-thirds of companies were experimenting with AI agents, while 88% were using AI in at least one business function, up from 78% in 2024, according to McKinsey's annual AI report. Yet, while early pilots often succeed, only one in 10 companies has actually scaled its AI agents. One major issue: AI agents are only as effective as the data foundation supporting them. Experts argue that most companies are seeing delays in implementing AI not because of shortcomings in the models, but because they lack data architectures that deliver business context reliably to both humans and agents.
What is Moltbook? The strange new social media site for AI bots
Some people are sceptical about whether the socialising of bots is a sign of what is coming with the rise of agentic AI. A bit like Reddit for artificial intelligence, Moltbook allows AI agents - bots built by humans - to post and interact with each other. On social media, people often accuse each other of being bots, but what happens when an entire social network is designed for AI agents to use? The site is designed to look like Reddit, with subreddits on different topics and upvoting.
- Europe > Ukraine (0.06)
- Oceania > Australia (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Asia > Middle East > Iran (0.05)
- Leisure & Entertainment > Sports (0.72)
- Media > News (0.71)
- Information Technology (0.51)
- Government > Regional Government (0.51)
- North America (0.05)
- Asia > Singapore (0.05)
- Education (0.69)
- Health & Medicine (0.49)
- Leisure & Entertainment > Sports > Soccer (0.30)
AWS CEO Matt Garman Doesn't Think AI Should Replace Junior Devs
The head of Amazon Web Services has big plans to offer AI tools to businesses, but says that replacing coders with AI is "a non-starter for anyone who's trying to build a long-term company." Amid the breathless coverage and relentless AI hype of recent years, one of the world's biggest tech companies--Amazon--has been notably absent. Matt Garman, the CEO of Amazon Web Services, is looking to change that. At the recent AWS re:Invent conference, Garman announced a bunch of frontier AI models, as well as a tool designed to let AWS customers build models of their own. That tool, Nova Forge, allows companies to engage in what's known as custom pretraining--adding their data in the process of building a base model--which should allow for vastly more customized models that suit a given company's needs. Sure, it doesn't quite have the sexiness of a Sora 2 announcement, but that's not Garman's goal: He's less interested in mass consumer use of AI and more interested in enterprise solutions that'll integrate AI into all of AWS's offerings--and have a material impact on a corporate P&L. For this week's episode, I caught up with Garman after AWS re:Invent to talk about what the company announced, whether he feels behind in the AI race, how he thinks about managing huge teams (and managing internal dissent), and why he's not convinced that AI is (or should be) the great job thief of our era.

We always start these conversations with some very quick questions, like a warmup. If AWS had a mascot, what would it be?

We have a big S3 bucket sometimes that goes around, so we'll call it that.

Sorry, what is an S3 bucket?

An S3 bucket is a thing that you store your S3 objects in, but we actually have a large foam bucket that walks around and actually looks like a paint bucket.

So you do have a mascot.

Well, S3 has a bucket, it has a mascot. It's probably the closest we have, and I like it.

What's the most expensive mistake you've ever made?

Personally, the most expensive mistake I ever made was playing basketball too long and tearing my Achilles. That cost me about nine months of being able to walk. I probably should have known that, well into my thirties, I was past basketball-playing age.
- North America > United States > California (0.14)
- South America (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- (3 more...)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- Information Technology > Communications > Mobile (0.64)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.46)
David vs. Goliath: Can Small Models Win Big with Agentic AI in Hardware Design?
Shankar, Shashwat, Pandey, Subhranshu, Mochahari, Innocent Dengkhw, Mali, Bhabesh, Chowdhury, Animesh Basak, Bhattacharjee, Sukanta, Karfa, Chandan
Large Language Model (LLM) inference demands massive compute and energy, making domain-specific tasks expensive and unsustainable. As foundation models keep scaling, we ask: is bigger always better for hardware design? Our work tests this by evaluating Small Language Models coupled with a curated agentic AI framework on NVIDIA's Comprehensive Verilog Design Problems (CVDP) benchmark. Results show that agentic workflows - through task decomposition, iterative feedback, and correction - not only unlock near-LLM performance at a fraction of the cost but also create learning opportunities for agents, paving the way for efficient, adaptive solutions in complex design tasks.
- North America > United States (0.04)
- Asia > India > Assam > Guwahati (0.04)
- Africa > Mali (0.04)
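The decompose / generate / correct loop the abstract describes can be sketched roughly as follows. This is an illustrative outline only: `small_lm` and `run_lint` below are invented placeholders standing in for a real Small Language Model endpoint and a Verilog lint/simulation toolchain, neither of which the paper's abstract specifies.

```python
def small_lm(prompt: str) -> str:
    """Placeholder for a Small Language Model call (hypothetical)."""
    return "module top(); endmodule"  # stub output for illustration

def run_lint(rtl: str) -> list[str]:
    """Placeholder for a Verilog checker; returns a list of error messages."""
    return [] if rtl.strip().endswith("endmodule") else ["missing endmodule"]

def agentic_design(spec: str, max_iters: int = 3) -> str:
    # 1. Task decomposition: split the spec into smaller sub-tasks.
    subtasks = [s.strip() for s in spec.split(";") if s.strip()]
    rtl_parts = []
    for task in subtasks:
        draft = small_lm(f"Write Verilog for: {task}")
        # 2. Iterative feedback and correction until the checker is satisfied.
        for _ in range(max_iters):
            errors = run_lint(draft)
            if not errors:
                break
            draft = small_lm(f"Fix these errors {errors} in:\n{draft}")
        rtl_parts.append(draft)
    return "\n".join(rtl_parts)
```

The key design point is that the feedback loop, not model scale, carries most of the weight: each sub-task gets several cheap correction rounds against tool output rather than one expensive large-model shot.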
STRIDE: A Systematic Framework for Selecting AI Modalities -- Agentic AI, AI Assistants, or LLM Calls
Asthana, Shubhi, Zhang, Bing, DeLuca, Chad, Mahindru, Ruchi, Patel, Hima
The rapid shift from stateless large language models (LLMs) to autonomous, goal-driven agents raises a central question: When is agentic AI truly necessary? While agents enable multi-step reasoning, persistent memory, and tool orchestration, deploying them indiscriminately leads to higher cost, complexity, and risk. We present STRIDE (Systematic Task Reasoning Intelligence Deployment Evaluator), a framework that provides principled recommendations for selecting between three modalities: (i) direct LLM calls, (ii) guided AI assistants, and (iii) fully autonomous agentic AI. STRIDE integrates structured task decomposition, dynamism attribution, and self-reflection requirement analysis to produce an Agentic Suitability Score, ensuring that full agentic autonomy is reserved for tasks with inherent dynamism or evolving context. Evaluated across 30 real-world tasks spanning SRE, compliance, and enterprise automation, STRIDE achieved 92% accuracy in modality selection, reduced unnecessary agent deployments by 45%, and cut resource costs by 37%. Expert validation over six months in SRE and compliance domains confirmed its practical utility, with domain specialists agreeing that STRIDE effectively distinguishes between tasks requiring simple LLM calls, guided assistants, or full agentic autonomy. This work reframes agent adoption as a necessity-driven design decision, ensuring autonomy is applied only when its benefits justify the costs.
- North America > United States (0.04)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- Asia > India (0.04)
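The modality-selection idea behind STRIDE can be sketched as a small scoring function. The signal names, weights, and thresholds below are invented for illustration; the paper defines its own Agentic Suitability Score, which this does not reproduce.

```python
def agentic_suitability(decomposition_depth: int,
                        dynamism: float,
                        needs_self_reflection: bool) -> str:
    """Map coarse task signals to one of three modalities (toy weights)."""
    score = 0.0
    score += min(decomposition_depth, 5) / 5 * 0.4   # multi-step structure
    score += dynamism * 0.4                          # evolving context (0..1)
    score += 0.2 if needs_self_reflection else 0.0   # agent must critique itself
    if score < 0.3:
        return "llm_call"        # (i) direct LLM call
    if score < 0.6:
        return "assistant"       # (ii) guided AI assistant
    return "agent"               # (iii) fully autonomous agentic AI
```

The point of the shape, mirroring the abstract, is that full autonomy is reserved for tasks that score high on inherent dynamism or self-reflection needs; a static single-step task falls through to a plain LLM call.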
Semantic Trading: Agentic AI for Clustering and Relationship Discovery in Prediction Markets
Capponi, Agostino, Gliozzo, Alfio, Zhu, Brian
Prediction markets allow users to trade on outcomes of real-world events, but are prone to fragmentation with overlapping questions, implicit equivalences, and hidden contradictions across markets. We present an agentic AI pipeline that autonomously (i) clusters markets into coherent topical groups using natural-language understanding over contract text and metadata, and (ii) identifies within-cluster market pairs whose resolved outcomes exhibit strong dependence, including "same-outcome" (correlated) and "different-outcome" (anti-correlated) relationships. Using a historical dataset of resolved markets on Polymarket, we evaluate the accuracy of the agent's relational predictions. We then synthesize discovered relationships into a simple trading strategy to quantify how discovered relationships translate into actionable strategies. Results show that agent-identified relationships have around 60-70% accuracy, and their induced trading strategies have an average return of 20% over week-long horizons, highlighting the ability of agentic AI and large language models to uncover latent semantic structure within prediction markets.
- North America > Canada (0.05)
- North America > United States > Iowa (0.04)
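The two pipeline stages in the abstract can be outlined in miniature: cluster markets into topical groups, then label within-cluster pairs as "same-outcome" or "different-outcome" from resolved results. The keyword-based grouping below is a deliberately crude stand-in for the natural-language understanding the paper uses, and all market data here is invented.

```python
from itertools import combinations

def cluster_markets(markets: dict[str, str]) -> dict[str, list[str]]:
    """Toy clustering: group by leading keyword; a real pipeline embeds text."""
    clusters: dict[str, list[str]] = {}
    for mid, question in markets.items():
        key = question.split()[0].lower()  # crude topical key
        clusters.setdefault(key, []).append(mid)
    return clusters

def related_pairs(clusters: dict[str, list[str]],
                  outcomes: dict[str, int]) -> list[tuple[str, str, str]]:
    """Within each cluster, label pairs by agreement of resolved outcomes."""
    pairs = []
    for members in clusters.values():
        for a, b in combinations(members, 2):
            relation = "same" if outcomes[a] == outcomes[b] else "different"
            pairs.append((a, b, relation))
    return pairs
```

A "different-outcome" pair found this way (e.g. two markets on the same rate decision that resolve oppositely) is the kind of anti-correlated relationship the paper feeds into its trading strategy.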
Agentifying Agentic AI
Dignum, Virginia, Dignum, Frank
Agentic AI seeks to endow systems with sustained autonomy, reasoning, and interaction capabilities. To realize this vision, its assumptions about agency must be complemented by explicit models of cognition, cooperation, and governance. This paper argues that the conceptual tools developed within the Autonomous Agents and Multi-Agent Systems (AAMAS) community, such as BDI architectures, communication protocols, mechanism design, and institutional modelling, provide precisely such a foundation. By aligning adaptive, data-driven approaches with structured models of reasoning and coordination, we outline a path toward agentic systems that are not only capable and flexible, but also transparent, cooperative, and accountable. The result is a perspective on agency that bridges formal theory and practical autonomy.
- North America > United States > Virginia (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Health & Medicine (1.00)
- Information Technology > Security & Privacy (0.46)